# Instruction Tuning Optimization
OpenELM 1.1B
OpenELM is a series of efficient language models from Apple. It uses a layer-wise scaling strategy to allocate parameters efficiently across transformer layers, and offers both pretrained and instruction-tuned models ranging from 270M to 3B parameters.
Large Language Model | Transformers
apple

Llama-3-EZO-VLM-1
A Japanese vision-language model based on Llama-3-8B-Instruct, enhanced with additional pretraining and instruction tuning to improve its Japanese capabilities.
Image-to-Text | Japanese
AXCXEPT

Magnum V2 12B
Apache-2.0
magnum-v2-12b is the fourth model in the series, aiming to replicate the prose quality of the Claude 3 models (especially Sonnet and Opus). It is fine-tuned from Mistral-Nemo-Base-2407 and offers strong text-generation capabilities.
Safetensors | Multilingual
anthracite-org

Llama-3-8B-Instruct-RR
Llama-3-8B-Instruct-RR is based on Llama-3-8B-Instruct. It uses Representation Rerouting (RR) to insert "circuit breakers" that reduce the generation of harmful content while preserving the model's capabilities.
Large Language Model | Transformers
GraySwanAI

Cambrian-8B
Apache-2.0
Cambrian is an open-source multimodal large language model designed with a vision-centric approach.
Image-to-Text | Transformers
nyu-visionx

RecurrentGemma 2B
RecurrentGemma is a family of open language models from Google built on a novel recurrent architecture. Both pretrained and instruction-tuned versions are available, suitable for a variety of text-generation tasks.
Large Language Model | Transformers
google
